I am interested in how frequently you see customers rebuilding or refreshing models they have deployed in production. Are they using our C&DS automation to schedule these refreshes? Do they refresh their models automatically, or do analysts perform these tasks manually?
This is what I see customers wanting:
-Champion/challenger.
-Self-improving models: the model learns as new data comes in (à la our naive Bayes 'self-learning' model); a minimal sketch of the idea follows this list.
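For illustration only, here is a minimal sketch of the "self-learning" idea using scikit-learn's incremental naive Bayes. This is not the Modeler/CADS implementation; the data, feature shapes, and batch sizes are all assumptions.

```python
# Illustrative sketch: an incremental ("self-learning") naive Bayes model that is
# updated as new batches of outcomes arrive. Not the Modeler SLRM implementation.
import numpy as np
from sklearn.naive_bayes import GaussianNB

model = GaussianNB()
classes = np.array([0, 1])  # classes must be declared up front for incremental fitting

def update_with_new_batch(X_new, y_new):
    """Fold a freshly observed batch of outcomes into the existing model."""
    model.partial_fit(X_new, y_new, classes=classes)

# First batch trains the model; later batches refine it as new data comes in.
rng = np.random.default_rng(0)
X0, y0 = rng.normal(size=(100, 3)), rng.integers(0, 2, size=100)
update_with_new_batch(X0, y0)

X1, y1 = rng.normal(size=(50, 3)), rng.integers(0, 2, size=50)
update_with_new_batch(X1, y1)

print(model.predict_proba(X1[:5]))  # scores now reflect everything seen so far
```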
This is what I see customers doing:
-Use CADS to store models.
-Manually score.
-Sometimes schedule scoring.
-Real-time deployment.
-Never: champion/challenger. Reason: a model needs thorough checking when it is re-created, and the interface in CADS is not up to par.
-Never: refresh. Reason: a model needs thorough checking. Refresh itself works well, but storing and replacing the refreshed model is too complex because it involves scripting and Modeler/CADS interplay. (A sketch of the kind of validation gate this checking implies follows this list.)
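As a rough illustration of that "thorough checking" step, here is a minimal sketch (not CADS or Modeler scripting) of a gate that only lets a refreshed challenger model replace the production champion when it measurably beats it on held-out data. The file paths, metric, and margin are hypothetical.

```python
# Hypothetical champion/challenger gate: replace the production model only if the
# refreshed model beats it on held-out data by a meaningful margin.
import joblib
from sklearn.metrics import roc_auc_score

MARGIN = 0.01  # challenger must beat the champion by at least this much AUC

def promote_if_better(champion_path, challenger_path, X_holdout, y_holdout):
    champion = joblib.load(champion_path)
    challenger = joblib.load(challenger_path)

    auc_champion = roc_auc_score(y_holdout, champion.predict_proba(X_holdout)[:, 1])
    auc_challenger = roc_auc_score(y_holdout, challenger.predict_proba(X_holdout)[:, 1])

    if auc_challenger >= auc_champion + MARGIN:
        joblib.dump(challenger, champion_path)  # challenger becomes the new champion
        return True
    return False  # keep the current champion; flag the challenger for manual review
```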
This is what I see customers wanting to do:
Analytical reporting:
-Being able to set up an experiment in CADS to track model performance over the lifetime of the model (a sketch of this kind of tracking follows this list).
-Having a comprehensive (i.e., prebuilt) and configurable model evaluation dashboard.
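As an illustration of what such an experiment could record, here is a minimal sketch that appends an evaluation metric to a log each time outcomes become available, so a dashboard can plot performance over the model's lifetime. The log path, CSV layout, and AUC metric are assumptions, not CADS features.

```python
# Sketch: log one evaluation row per scoring period so a dashboard can chart
# model performance over time. File name and metric are hypothetical.
import csv
import os
from datetime import datetime, timezone
from sklearn.metrics import roc_auc_score

LOG_PATH = "model_performance_log.csv"

def log_evaluation(model_name, y_true, y_score):
    row = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "auc": roc_auc_score(y_true, y_score),
        "n_records": len(y_true),
    }
    log_exists = os.path.exists(LOG_PATH) and os.path.getsize(LOG_PATH) > 0
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if not log_exists:       # write the header on first use
            writer.writeheader()
        writer.writerow(row)
    return row
```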
Operational reporting:
-Being able to see what the model predicts (without knowing the outcomes yet, hence operational rather than analytical reporting).
-Having a comprehensive and configurable model scoring dashboard.
In addition: having both capabilities work conveniently when one has many models in production. On top of this large-scale model deployment, having a way to quickly gain insight into model trends and to raise alerts on the worst-performing models (a sketch of this follows).
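As a sketch of that alerting idea, the snippet below reads the performance log from the previous example, looks at each model's recent trend, and flags the worst performers. The thresholds, window size, and log layout are assumptions for illustration.

```python
# Sketch: monitor many deployed models at once by scanning the shared performance
# log, then flag models with a low or sharply declining recent AUC.
import pandas as pd

def worst_performing(log_path="model_performance_log.csv", min_auc=0.65, window=5):
    log = pd.read_csv(log_path, parse_dates=["timestamp"])
    alerts = []
    for model, history in log.sort_values("timestamp").groupby("model"):
        recent = history.tail(window)                          # last few evaluation runs
        trend = recent["auc"].iloc[-1] - recent["auc"].iloc[0]  # change over the window
        if recent["auc"].iloc[-1] < min_auc or trend < -0.05:
            alerts.append({"model": model,
                           "latest_auc": recent["auc"].iloc[-1],
                           "trend": trend})
    return sorted(alerts, key=lambda a: a["latest_auc"])        # worst first

for alert in worst_performing():
    print(f"ALERT: {alert['model']} AUC={alert['latest_auc']:.3f} trend={alert['trend']:+.3f}")
```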