I am using SciPy’s boxcox function to perform a Box-Cox transformation on a continuous variable.

from scipy.stats import boxcox
import numpy as np
y = np.random.random(100)
y_box, lambda_ = boxcox(y + 1)  # add 1 so zero values can be transformed

Then, I fit a statistical model to predict the values of this Box-Cox transformed variable. The model predictions are in the Box-Cox scale and I want to transform them to the original scale of the variable.

from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor()
X = np.random.random((100, 100))
rf.fit(X, y_box)
pred_box = rf.predict(X)

However, I can’t find a SciPy function that performs a reverse Box-Cox transformation given transformed data and lambda. Is there such a function? I coded an inverse transformation for now.

pred_y = np.power((pred_box * lambda_) + 1, 1 / lambda_) - 1
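As a sanity check, the manual inverse can be verified by round-tripping the training data (note the power-based formula assumes the fitted lambda is nonzero; the log branch is not handled):

```python
import numpy as np
from scipy.stats import boxcox

y = np.random.random(100)
y_box, lambda_ = boxcox(y + 1)

# Manual inverse: undo the power transform, then the +1 shift.
# Assumes lambda_ != 0 (the log branch is not covered).
y_restored = np.power((y_box * lambda_) + 1, 1 / lambda_) - 1

print(np.allclose(y_restored, y))  # True
```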

SciPy has added an inverse Box-Cox transformation.


scipy.special.inv_boxcox(y, lmbda)

Compute the inverse of the Box-Cox transformation.

Find x such that:

y = (x**lmbda - 1) / lmbda  if lmbda != 0
    log(x)                  if lmbda == 0

Parameters:

y : array_like
    Data to be transformed.
lmbda : array_like
    Power parameter of the Box-Cox transform.

Returns:

x : array
    Transformed data.


New in version 0.16.0.


from scipy.special import boxcox, inv_boxcox
y = boxcox([1, 4, 10], 2.5)
inv_boxcox(y, 2.5)

output: array([1., 4., 10.])
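For the lmbda == 0 branch in the definition above, the inverse reduces to exp(y); inv_boxcox handles that case automatically, as a quick check shows:

```python
import numpy as np
from scipy.special import boxcox, inv_boxcox

# With lmbda = 0 the transform falls back to the log branch, y = log(x),
# so the inverse is simply exp(y).
y = boxcox([1, 4, 10], 0)
x = inv_boxcox(y, 0)

print(np.allclose(x, [1, 4, 10]))  # True
print(np.allclose(x, np.exp(y)))   # True
```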

  1. Here is the code; it works, as the test below shows. SciPy uses the natural (Napierian) logarithm; I checked the Box-Cox paper and it seems they used log10, but I kept the natural logarithm because that is what SciPy uses.
  2. The code follows:

    import numpy as np
    from scipy import stats

    def invboxcox(y, ld):
        if ld == 0:
            return np.exp(y)
        return np.exp(np.log(ld * y + 1) / ld)

    # Test the code
    x = [100]
    ld = 0
    y = stats.boxcox(x, ld)
    print(invboxcox(y[0], ld))  # ~100.0

Thanks to @Warren Weckesser, I’ve learned that the version of SciPy current at the time of writing does not have a function to reverse a Box-Cox transformation, though a future release may add one. For now, the code in my question may help others reverse Box-Cox transformations.

In order to invert the Box-Cox transformation from scipy.stats.boxcox using scipy.special.inv_boxcox, you have to supply the lambda that was generated by the forward transform.

First, apply the transformation and print the lambda (i.e. param).

from scipy import stats

df[feature_boxcox], param = stats.boxcox(df[feature])
print('Optimal lambda:', param)

Then, to invert the transformation, pass in the generated lambda:

from scipy.special import inv_boxcox

inv_boxcox(df[feature_boxcox], param)
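A self-contained sketch of that workflow, using a toy DataFrame in place of the original df (the column name 'feature' here is made up for illustration):

```python
import numpy as np
import pandas as pd
from scipy import stats
from scipy.special import inv_boxcox

# Toy DataFrame standing in for df; 'feature' is a hypothetical column.
df = pd.DataFrame({'feature': np.random.random(100) + 1})

# Forward transform: lambda (param) is estimated by maximum likelihood.
df['feature_boxcox'], param = stats.boxcox(df['feature'])

# Inverting with the same lambda recovers the original column.
restored = inv_boxcox(df['feature_boxcox'], param)
print(np.allclose(restored, df['feature']))  # True
```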

I recommend looking at the Yeo-Johnson transformation, a Box-Cox analogue that also works with negative values and is well implemented in the scikit-learn library, with an easy inverse transformation.
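A minimal sketch of that round trip, standalone and with made-up data that includes negative values (which plain Box-Cox cannot handle):

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

pt = PowerTransformer(method='yeo-johnson')

# Made-up data; negative and zero values are fine for Yeo-Johnson.
data = np.array([-2.0, -0.5, 0.0, 1.5, 3.0]).reshape(-1, 1)

transformed = pt.fit_transform(data)
restored = pt.inverse_transform(transformed)
print(np.allclose(restored, data))  # True
```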

I’m using it with fbprophet library (forecasting):

from sklearn.preprocessing import PowerTransformer

from fbprophet import Prophet
from fbprophet.plot import plot_cross_validation_metric
from fbprophet.diagnostics import cross_validation
from fbprophet.diagnostics import performance_metrics
import numpy as np
import pandas as pd

def inverse_transform(df, pt_instance, features):
    for feature in features:
        df[feature] = pt_instance.inverse_transform(np.array(df[feature]).reshape(-1,1))
    return df

pt = PowerTransformer(method='yeo-johnson')

train_df_transformed = train_df.copy()
train_df_transformed['y'] = pt.fit_transform(np.array(train_df['y']).reshape(-1,1))

model = Prophet(**hyperparams)
model.fit(train_df_transformed)  # fit on the transformed target before cross-validation
df_cv = cross_validation(model, initial="14 days", period='3 days', horizon='1 day', parallel="processes")
df_cv = inverse_transform(df_cv, pt, ['yhat','yhat_lower','yhat_upper'])
df_cv = pd.merge(df_cv.drop(columns=['y']),train_df, left_on='ds', right_on='ds')
df_p = performance_metrics(df_cv, metrics=['mae','mape'], rolling_window=1)
fig1 = plot_cross_validation_metric(df_cv, metric="mape")
fig2 = plot_cross_validation_metric(df_cv, metric="mae")