Day 35: Iterative Loop of ML Development

In the next few sections, we'll explore the process of developing an ML system. The iterative loop of ML development goes like this: choose the architecture (model, data, etc.), train the model, run diagnostics (bias/variance and error analysis), and use what you learn to adjust the architecture or data, then loop again.

Error Analysis

Of the diagnostics you can run to decide what to try next to improve your learning algorithm's performance, bias and variance is probably the most important idea, followed by error analysis.

The error analysis process refers to manually looking through the misclassified examples and categorizing them based on common traits. For example, suppose your algorithm misclassified the following spam emails:

  • pharma spam (21)

  • deliberate misspellings (w4tches, etc.) (3)

  • unusual email routing (7)

  • phishing (18)

  • spam messages in embedded images (5)

Manually examining a set of examples that your algorithm is misclassifying gives you an idea of what to try next, and sometimes it also tells you that certain types of errors are sufficiently rare that they aren't worth much of your time to fix.
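As a rough sketch of what this counting looks like in code, here is a minimal Python example; in practice the category label for each misclassified email would be assigned by hand, and the counts below just mirror the list above:

```python
# Minimal error-analysis sketch: tally hand-labeled misclassification
# categories to see where effort is best spent.
from collections import Counter

# In practice, you would read each misclassified email and tag it yourself.
misclassified_tags = (
    ["pharma"] * 21
    + ["deliberate misspellings"] * 3
    + ["unusual email routing"] * 7
    + ["phishing"] * 18
    + ["embedded image spam"] * 5
)

counts = Counter(misclassified_tags)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category:>25}: {n:3d} ({n / total:5.1%})")
```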

One limitation of error analysis is that it's much easier to do for problems that humans are good at.


Adding Data

Instead of adding more of every kind of data, we can target the types where error analysis has indicated more examples might help; e.g., go through unlabeled data and find more examples of pharma-related spam.
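As a loose illustration of that targeting (the keyword list and function names here are hypothetical, not part of any real pipeline), you might filter the unlabeled pool with a simple heuristic and send only the candidates to human labelers:

```python
# Hypothetical sketch: pull likely pharma-related emails out of an
# unlabeled pool so labeling effort goes where error analysis pointed.
PHARMA_KEYWORDS = {"pharmacy", "prescription", "pills", "rx"}  # illustrative

def looks_pharma_related(email_text: str) -> bool:
    """Crude keyword heuristic for pre-filtering unlabeled emails."""
    words = set(email_text.lower().split())
    return bool(words & PHARMA_KEYWORDS)

def select_candidates(unlabeled_pool: list[str]) -> list[str]:
    """Return emails worth sending to human labelers."""
    return [email for email in unlabeled_pool if looks_pharma_related(email)]
```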


Beyond getting more training examples, another widely used technique, especially for image and audio data, is to use data augmentation, in which we use existing training examples to create new training examples.


For example, given an image, we might create a new training example by rotating, shrinking, or enlarging it to add more variability to the data. One tip for data augmentation: the changes or distortions you make should be representative of the types of noise or distortions in the test set.
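Here is a minimal augmentation sketch using the Pillow library; the rotation and scaling ranges are illustrative and should be tuned so the distorted images still look like realistic inputs:

```python
# Data-augmentation sketch using Pillow (pip install Pillow).
import random
from PIL import Image

def augment(image: Image.Image) -> Image.Image:
    """Create a new training example from an existing one."""
    # Small random rotation; large rotations may not be realistic.
    image = image.rotate(random.uniform(-15, 15))
    # Random shrink/enlarge, then resize back to the original dimensions.
    scale = random.uniform(0.8, 1.2)
    w, h = image.size
    image = image.resize((int(w * scale), int(h * scale)))
    return image.resize((w, h))
```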


Data synthesis is another way of adding data, in which you make brand-new examples from scratch, using artificial inputs to create new training examples.


For example, in a photo-OCR task, a photo of Times Square in New York shows buildings covered in text. To create artificial data, you can render text from a text editor in the many fonts available on your computer and use those rendered images as additional training examples.
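A minimal sketch of that kind of synthesis, again using Pillow to render words onto a blank canvas (the font paths and canvas size are assumptions for illustration):

```python
# Data-synthesis sketch for a photo-OCR task: render words in random
# fonts to create brand-new examples that are labeled by construction.
import random
from PIL import Image, ImageDraw, ImageFont

FONT_PATHS = ["DejaVuSans.ttf", "DejaVuSerif.ttf"]  # assumed installed fonts

def synthesize_example(word: str) -> Image.Image:
    """Return a synthetic image of `word`; the label is the word itself."""
    font = ImageFont.truetype(random.choice(FONT_PATHS),
                              size=random.randint(24, 48))
    canvas = Image.new("L", (200, 64), color=255)  # grayscale, white background
    ImageDraw.Draw(canvas).text((10, 10), word, font=font, fill=0)
    return canvas
```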


Thanks to the paradigm of ML research today, most algorithms we have access to are already very good and will work well for many applications, so it may be more fruitful to focus on taking a data-centric approach, in which you spend more time engineering the data used by your algorithm.


Transfer Learning

For an application where you don't have much data, transfer learning is a wonderful technique that lets you use data from a different task to help on your application.

The process of transfer learning goes something like this:

  • Download neural network parameters pre-trained on a large dataset with the same input type as your own application.

  • Fine-tune the network on your own data. This may mean training only the output layer's parameters while freezing the hidden layers' parameters, or training the entire network on your own data (see the sketch after this list).
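Here is a minimal sketch of that process in Keras, assuming an image task with 10 classes; the choice of pretrained model and the commented-out training call are illustrative, not prescriptive:

```python
# Transfer-learning sketch in Keras (TensorFlow).
import tensorflow as tf

# Step 1: download parameters pre-trained on a large dataset (ImageNet),
# dropping the original output layer.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet",
    pooling="avg",
)
base.trainable = False  # Option 1: freeze the hidden layers' parameters.

# Step 2: add a new output layer for your own task and train it.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 classes, illustrative
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(x_train, y_train, epochs=5)  # placeholder training data

# Option 2: afterwards, unfreeze the base and fine-tune the entire
# network on your own data at a low learning rate.
# base.trainable = True
# model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
#               loss="sparse_categorical_crossentropy")
```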



