Question

[Solved] ‘invalid value encountered in double_scalars’ warning, possibly numpy

As I run my code I get these warnings, always in groups of four, sporadically. I have tried to locate the source by placing debug messages before and after certain statements to pinpoint their origin.

Warning: invalid value encountered in double_scalars
Warning: invalid value encountered in double_scalars
Warning: invalid value encountered in double_scalars
Warning: invalid value encountered in double_scalars

Is this a NumPy warning, and what is a double scalar?

From Numpy I use

min(), argmin(), mean() and random.randn()

I also use Matplotlib

Enquirer: Theodor


Solution #1:

It looks like a floating-point calculation error. Check the numpy.seterr function to get more information about where it happens.
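
As a minimal sketch (assuming a recent NumPy version), numpy.seterr can turn the "invalid value" warning into an exception so the traceback shows exactly where it happens:

import numpy as np

# Raise an exception instead of printing a warning for invalid floating-point operations
np.seterr(invalid='raise')   # use all='raise' to also catch divide, overflow, underflow

a = np.float64(0.0)
try:
    a / a                    # 0/0 on double scalars is an "invalid" operation
except FloatingPointError as e:
    print("caught:", e)      # the traceback now points at the offending statement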

Respondent: eumiro

Solution #2:

In my case, I found out it was division by zero.
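
For illustration, 0/0 on NumPy double scalars is the kind of operation that produces exactly this message (recent NumPy versions phrase it as "invalid value encountered in divide" instead):

import numpy as np

x = np.float64(0.0)
y = np.float64(0.0)
print(x / y)   # RuntimeWarning: invalid value encountered in double_scalars; prints nan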

Respondent: Volod

Solution #3:

Sometimes NaNs or null values in data will generate this warning with NumPy. If you are ingesting data from, say, a CSV file and then operating on it with NumPy arrays, the problem may have originated with your data ingest. You could try feeding your code a small set of data with known values and see whether you get the same result.
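
A small sketch of that kind of check after ingesting a CSV (the file name values.csv is only an assumption for illustration):

import numpy as np

# genfromtxt turns missing or unparseable fields into nan by default
data = np.genfromtxt("values.csv", delimiter=",")

# Locate NaNs before computing statistics on the array
if np.isnan(data).any():
    print("rows containing NaN:", np.unique(np.argwhere(np.isnan(data))[:, 0]))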

Respondent: Jeff

Solution #4:

A zero-size array passed to numpy.mean raises this warning (as indicated in several comments); see the sketch after the lists below.

Some other candidates also raise it:

  • median also raises this warning on a zero-sized array.

Other candidates do not raise this warning:

  • min and argmin both raise ValueError on an empty array
  • randn takes *args; calling randn(*[]) returns a single random number
  • std and var return nan on an empty array
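
A quick sketch confirming the two behaviours named above (the exact warning text may vary across NumPy versions):

import numpy as np

print(np.mean(np.array([])))    # warns ("Mean of empty slice" / invalid value) and prints nan

try:
    np.min(np.array([]))        # empty reductions without an identity fail loudly instead
except ValueError as e:
    print("min:", e)
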
Respondent: Dave

Solution #5:

I ran into a similar problem – "invalid value encountered in …". After spending a lot of time trying to figure out what was causing this warning, I believe in my case it was due to NaN values in my dataframe. Check out working with missing data in pandas.

>>> None == None
True
>>> np.nan == np.nan
False

Because NaN is not equal to NaN, arithmetic operations such as division and multiplication involving NaN end up throwing this warning.

A couple of things you can do to avoid this problem (see the sketch after this list):

  1. Use pd.set_option to set the number of decimals to consider in your analysis, so an infinitesimally small number does not trigger a similar problem – ('display.float_format', lambda x: '%.3f' % x).

  2. Use df.round() to round the numbers so pandas drops the remaining digits from the analysis. And most importantly,

  3. Set NaN to zero with df = df.fillna(0). Be careful if filling NaN with zero does not apply to your data set, because this treats those records as zero, so the N in the mean, std, etc. also changes.
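
A minimal sketch of these three options on a hypothetical DataFrame (the column name "a" and the sample values are assumptions):

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0]})

pd.set_option("display.float_format", lambda x: "%.3f" % x)   # 1. limit displayed decimals
df = df.round(3)                                              # 2. round the stored values
df = df.fillna(0)                                             # 3. replace NaN with zero

print(df["a"].mean())   # note: the filled zero now counts toward N in the mean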

Respondent: S_Dhungel

Solution #6:

Whenever you are working with CSV imports, try using df.dropna() to avoid all such warnings or errors.
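
A short sketch, assuming a CSV file named data.csv with some blank fields:

import pandas as pd

df = pd.read_csv("data.csv")
df = df.dropna()         # drop rows that contain any NaN before further processing
print(df.describe())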

Respondent: Abhinav Bangia

Solution #7:

I encountered this while calculating np.var(np.array([])). np.var divides by the size of the array, which is zero in this case.
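
For reference, the same one-liner with the observed behaviour (the exact warning wording may differ by NumPy version):

import numpy as np

print(np.var(np.array([])))   # sum of squared deviations (0.0) / size (0) -> warning, then nan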

Respondent: ???

The answers/resolutions are collected from Stack Overflow and are licensed under cc by-sa 2.5, cc by-sa 3.0 and cc by-sa 4.0.
