Model

Training Data

  • Review how often your data sources are refreshed
  • Collect live data from users
  • Provide easy access to labels
  • Translate user needs into data needs
  • Only introduce new features when needed
  • Identify your data sources
  • Identify outliers, and investigate whether they are genuine extremes or errors in the data (see the outlier check after this list)
  • Source your data responsibly
  • Design for raters and labeling
  • Split your data into training, validation, and test sets (see the split sketch after this list)
  • Let raters change their minds
  • Evaluate rater tools
  • Consider missing or incomplete data (see the missingness check after this list)
  • Consider unexpected input
  • Investigate rater context and incentives
  • Articulate your data sources
  • Beware of confirmation bias
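
A minimal sketch of the outlier check above, assuming pandas is available; the `age` column, the example values, and the 1.5×IQR rule are illustrative choices, not part of the checklist.

```python
import pandas as pd

def flag_outliers_iqr(df: pd.DataFrame, column: str, k: float = 1.5) -> pd.DataFrame:
    """Return rows whose value falls outside k * IQR of the middle 50%."""
    q1, q3 = df[column].quantile([0.25, 0.75])
    iqr = q3 - q1
    mask = (df[column] < q1 - k * iqr) | (df[column] > q3 + k * iqr)
    # Flagged rows are candidates for review, not automatic removal:
    # each value may be a genuine extreme or a data-entry error.
    return df[mask]

df = pd.DataFrame({"age": [23, 31, 29, 27, 250, 34]})
print(flag_outliers_iqr(df, "age"))  # the 250 row surfaces for inspection
```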
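
One possible three-way split, assuming scikit-learn; the 70/15/15 proportions and stratified sampling are assumptions chosen to illustrate keeping the test set untouched during tuning.

```python
from sklearn.model_selection import train_test_split

def three_way_split(X, y, test_size=0.15, val_size=0.15, seed=42):
    """Carve out a held-out test set first, then a validation set,
    so the test set is never touched while tuning the model."""
    X_rest, X_test, y_rest, y_test = train_test_split(
        X, y, test_size=test_size, random_state=seed, stratify=y)
    # Rescale val_size so it is a fraction of the remaining data.
    rel_val = val_size / (1.0 - test_size)
    X_train, X_val, y_train, y_val = train_test_split(
        X_rest, y_rest, test_size=rel_val, random_state=seed, stratify=y_rest)
    return X_train, X_val, X_test, y_train, y_val, y_test
```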
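
A quick missingness check for the item above, assuming pandas; the table is a hypothetical stand-in for your real training data.

```python
import numpy as np
import pandas as pd

# Hypothetical table standing in for your real training data.
df = pd.DataFrame({
    "age":    [23, 31, np.nan, 27, 34],
    "income": [52_000, np.nan, np.nan, 61_000, 48_000],
    "city":   ["Oslo", "Lagos", "Lima", None, "Pune"],
})

# Per-column missingness: a high rate may call for imputation, a
# "value was missing" indicator feature, or dropping the column.
missing_share = df.isna().mean().sort_values(ascending=False)
print(missing_share)
```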

Training Procedure

  • Design for experimentation
  • Inspect each feature's possible values, units, and data types
  • Evaluate the outcomes of the reward function
  • Weigh the costs of false positives & false negatives
  • Consider precision and recall tradeoffs (see the threshold sketch after this list)
  • Balance underfitting and overfitting (see the capacity sweep after this list)
  • Tune your model
  • Map existing workflows
  • Design and evaluate the reward function
  • Design for model tuning
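
A sketch of the precision/recall tradeoff, assuming scikit-learn; the labels and scores are hypothetical stand-ins for a trained model's output on a validation set.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Hypothetical labels and scores standing in for a trained model's
# output on a validation set.
y_true  = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.90, 0.65, 0.30, 0.55, 0.15])

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
for p, r, t in zip(precision, recall, thresholds):
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
# Raising the threshold generally trades recall for precision; choose the
# operating point that reflects how you weighed false positives & negatives.
```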
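
A capacity sweep illustrating the underfitting/overfitting balance, again assuming scikit-learn; the synthetic dataset and decision tree are stand-ins for whatever data and model you are tuning.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data and a decision tree as stand-ins for your model and data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# Sweep model capacity: low scores on both sets suggest underfitting,
# while a large train/validation gap suggests overfitting.
for depth in (1, 3, 5, 10, None):
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(f"max_depth={depth}: "
          f"train={model.score(X_tr, y_tr):.3f}  "
          f"val={model.score(X_val, y_val):.3f}")
```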