For some context: I am scripting the learning and inference process in a
large loop, as I have some 4500 individual time-series predictions to make
(the Kaggle Walmart exercise: 9 months of weekly predictions for 99
departments in 45 stores).
On datasets with no/missing sales data, I impute zero values so that I can
still provide data to the learner within the loop. My hope was that, as
with the LinearRegression and MultilayerPerceptron learners, this would
just result in zero-valued predictions rather than the learner falling over.
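To make the imputation step concrete, here is a minimal sketch (not the actual script; the function name `fill_missing_weeks` and the toy data are made up for illustration) of filling every week that has no sales record with zero before handing the series to a learner:

```python
# Illustrative sketch: weeks absent from the sales record become 0.0,
# so every department/store series handed to the learner is complete.
# `fill_missing_weeks` is a hypothetical helper, not part of any library.

def fill_missing_weeks(sales, n_weeks):
    """Return a length-n_weeks list; weeks missing from `sales` become 0.0."""
    return [sales.get(week, 0.0) for week in range(n_weeks)]

# A department with recorded sales in only two of five weeks:
sparse = {3: 120.5, 4: 98.0}
print(fill_missing_weeks(sparse, 5))  # [0.0, 0.0, 0.0, 120.5, 98.0]
```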
What I found with datasets that have only a couple of entries near the end
of the training set is that, when evaluating with a dataset split (in the
Explorer), the learner sees only identical values in the training portion
of the split and thus fails with the same complaint as above.
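The failure mode above can be sketched numerically (a hypothetical illustration; the helper name and the 66% split fraction are assumptions, not taken from the actual setup): when the few non-zero entries sit at the very end of the series, the training portion of the split contains only the imputed zeros, i.e. a constant attribute, which some learners reject.

```python
# Hypothetical illustration: check whether the training portion of a
# percentage split contains only one distinct value (all imputed zeros).

def train_portion_is_constant(series, train_fraction=0.66):
    """True if the first train_fraction of the series is constant-valued."""
    cut = int(len(series) * train_fraction)
    return len(set(series[:cut])) <= 1

# Non-zero sales only near the end: the training split is all zeros.
series = [0.0] * 8 + [120.5, 98.0]
print(train_portion_is_constant(series))                          # True
print(train_portion_is_constant([1.0, 2.0, 0.0, 3.0, 0.0, 0.0]))  # False
```

Training on the full dataset avoids this, since the non-zero entries are then included.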
This does not cause any issues in the scripted runs, though, because there
no portion of the data is reserved for evaluation and the full dataset is
available for training.