Machine Learning in iOS: Cats and Dogs
Hi! In this post, I explain how to develop an iOS application that can recognize whether a picture shows a cat or a dog. First, we will build the model that does the classification, and then we will develop an iOS application that uses it.
We need a dataset of images: in this case, a set of cat images and a set of dog images. We can download large datasets for this purpose from the Kaggle website.
Now we can build the model with the Create ML app. Open Create ML, create a new project, and choose the Image Classification template.
We fill in a project name and a description.
Now we add the training data with the previously downloaded images, and we add the test data. We have 8,000 training items and 2,000 testing items.
Now we can train! Just click the Train button; it takes about ten minutes.
That’s it! Now we can export the model from the Output tab: click the Get button and save the .mlmodel file.
Now we will create an iOS 14 application with SwiftUI.
Our idea is to develop this screen.
We need to add the .mlmodel file to the Xcode project. This is simple, and Xcode includes a previewer for the model.
First, we create a ViewModel with the classificationRequest. This ViewModel holds the request that runs on the image and publishes the result as a String value.
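A minimal sketch of that ViewModel could look like this. The model class name `CatsDogsClassifier` is an assumption here (Xcode generates the class from whatever name you gave the .mlmodel file), as is the class name `ClassificationViewModel`:

```swift
import SwiftUI
import CoreML
import Vision

// Sketch of the ViewModel. CatsDogsClassifier is the class Xcode
// generates from the .mlmodel file (use your own model's name).
final class ClassificationViewModel: ObservableObject {
    // The classification result shown in the UI.
    @Published var resultText: String = ""

    // Lazily build the Vision request that wraps the Core ML model.
    lazy var classificationRequest: VNCoreMLRequest = {
        do {
            let config = MLModelConfiguration()
            let model = try VNCoreMLModel(for: CatsDogsClassifier(configuration: config).model)
            let request = VNCoreMLRequest(model: model) { [weak self] request, error in
                // processObservations handles the classification results.
                self?.processObservations(for: request, error: error)
            }
            request.imageCropAndScaleOption = .centerCrop
            return request
        } catch {
            fatalError("Failed to load the Core ML model: \(error)")
        }
    }()
}
```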
processObservations is a function that checks the results of the classification.
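It could be sketched like this, assuming the ViewModel above is named `ClassificationViewModel`; it reads the top `VNClassificationObservation` and publishes it on the main queue:

```swift
import Vision

extension ClassificationViewModel {
    // Reads the classification observations and publishes the top result.
    func processObservations(for request: VNRequest, error: Error?) {
        DispatchQueue.main.async {
            if let results = request.results as? [VNClassificationObservation],
               let best = results.first {
                // e.g. "dog 98%"
                self.resultText = String(format: "%@ %.0f%%",
                                         best.identifier,
                                         best.confidence * 100)
            } else {
                self.resultText = "Could not classify the image"
            }
        }
    }
}
```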
Finally, we need a public method that is the entry point to classify the image.
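A sketch of that entry point, under the same assumptions as above: it wraps the image in a `VNImageRequestHandler` and performs the request on a background queue.

```swift
import UIKit
import Vision

extension ClassificationViewModel {
    // Public entry point: run the Vision request on a UIImage.
    func classify(image: UIImage) {
        guard let cgImage = image.cgImage else {
            resultText = "Unable to read the image"
            return
        }
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up)
            do {
                try handler.perform([self.classificationRequest])
            } catch {
                DispatchQueue.main.async {
                    self.resultText = "Classification failed: \(error.localizedDescription)"
                }
            }
        }
    }
}
```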
And that’s it! The rest of the code is easy to understand.
- ImagePicker is a view that bridges the UIKit picker with SwiftUI.
- ContentView is the main screen. It contains an image in the background, the results label at the top, and two buttons at the bottom.
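The ImagePicker bridge could be sketched with the standard `UIViewControllerRepresentable` pattern around `UIImagePickerController` (the property names here are assumptions, not necessarily the ones in the repository):

```swift
import SwiftUI
import UIKit

// Bridges UIImagePickerController (UIKit) into SwiftUI.
struct ImagePicker: UIViewControllerRepresentable {
    @Binding var image: UIImage?
    @Environment(\.presentationMode) private var presentationMode

    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController,
                                context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    // The coordinator receives the UIKit delegate callbacks.
    final class Coordinator: NSObject, UIImagePickerControllerDelegate,
                             UINavigationControllerDelegate {
        let parent: ImagePicker
        init(_ parent: ImagePicker) { self.parent = parent }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            parent.image = info[.originalImage] as? UIImage
            parent.presentationMode.wrappedValue.dismiss()
        }
    }
}
```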
Testing this application is simple.
This kind of model has a problem: what happens if the image is neither a dog nor a cat? A random result appears. But that is a topic for another chapter.
Kind regards :-)
PS: The repository of this project is here.