How to Build an Image Classification Model with MobileNetV2 for Cat and Dog Images

  • Description:
    This code implements a binary image classification model that uses MobileNetV2 to distinguish cat images from dog images. It applies transfer learning by freezing the pre-trained MobileNetV2 layers and adding custom dense layers for classification. The model is trained on the preprocessed images, evaluated with standard performance metrics, and its predictions are summarized in a confusion matrix.
Step-by-Step Process
  • Import Libraries:
    Import essential libraries like numpy, tensorflow, PIL, and sklearn for image processing and model building.
  • Load and Inspect Images:
    Load cat and dog images from their directories and visualize a few samples for inspection (a minimal inspection sketch follows this list).
  • Preprocess Data:
    Resize images to 224x224 pixels, normalize pixel values, and encode labels.
  • Build and Train Model:
    Use MobileNetV2 as the base model, freeze pre-trained layers, and add custom dense layers for classification.
  • Evaluate and Visualize:
    Evaluate the model using test data, calculate performance metrics, and plot a confusion matrix.
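  • Sample Inspection Sketch:
    The sample source code in the next section goes straight from loading to preprocessing, so the inspection part of the "Load and Inspect Images" step is sketched here separately. This is a minimal, illustrative sketch only: it assumes the same placeholder cat_path and dog_path directories used in the sample code and simply displays four random images per class.
    # Minimal sketch: inspect a few random cat and dog images before preprocessing
    import os
    import random
    import matplotlib.pyplot as plt
    from PIL import Image

    cat_path = "/path/to/cat/images"  # placeholder path, as in the sample code
    dog_path = "/path/to/dog/images"  # placeholder path, as in the sample code

    fig, axes = plt.subplots(2, 4, figsize=(12, 6))
    for row, (label, path) in enumerate([("cat", cat_path), ("dog", dog_path)]):
        for col, fname in enumerate(random.sample(os.listdir(path), 4)):
            img = Image.open(os.path.join(path, fname)).convert('RGB')
            axes[row, col].imshow(img)
            axes[row, col].set_title(label)
            axes[row, col].axis('off')
    plt.tight_layout()
    plt.show()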
Sample Source Code
  • # Import Libraries
    import numpy as np
    import os
    import random
    from PIL import Image
    import matplotlib.pyplot as plt
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report, confusion_matrix
    from tensorflow.keras.models import Model
    from tensorflow.keras.layers import Input, Flatten, Dense
    from tensorflow.keras.applications import MobileNetV2

    # Paths to images
    cat_path = "/path/to/cat/images"
    dog_path = "/path/to/dog/images"
    cats = os.listdir(cat_path)
    dogs = os.listdir(dog_path)

    # Process Images
    def process_images(path, images):
        feature_list = []
        for img_file in images:
            img = Image.open(os.path.join(path, img_file))
            # Ensure three colour channels (some images may be grayscale or RGBA)
            if img.mode != 'RGB':
                img = img.convert('RGB')
            # Resize to the 224x224 input size expected by MobileNetV2
            img_resized = img.resize((224, 224))
            feature_list.append(np.array(img_resized))
        return np.array(feature_list)

    cat_features = process_images(cat_path, cats)
    dog_features = process_images(dog_path, dogs)

    # Combine Data and normalize pixel values to the [0, 1] range
    X = np.concatenate((cat_features, dog_features), axis=0).astype('float32') / 255.0
    y = np.array([0]*len(cat_features) + [1]*len(dog_features))  # 0 = cat, 1 = dog

    # Train-Test Split
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # Build Model
    base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    base_model.trainable = False
    x = Flatten()(base_model.output)
    x = Dense(64, activation='relu')(x)
    x = Dense(1, activation='sigmoid')(x)
    model = Model(inputs=base_model.input, outputs=x)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

    # Train Model
    history = model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))

    # Evaluate Model
    y_pred_probs = model.predict(X_test)
    y_pred = (y_pred_probs > 0.5).astype(int).ravel()  # threshold the sigmoid outputs at 0.5
    print(classification_report(y_test, y_pred))
    print(confusion_matrix(y_test, y_pred))
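  • Optional: Plot the Confusion Matrix and Training Curves
    The "Evaluate and Visualize" step calls for plotting the confusion matrix, but the listing above only prints it. The following is a minimal sketch that reuses the y_test, y_pred, and history variables defined above; the heatmap layout, class labels, and figure sizes are illustrative choices rather than part of the original listing.
    # Plot the confusion matrix as a labelled heatmap
    import matplotlib.pyplot as plt
    from sklearn.metrics import confusion_matrix

    cm = confusion_matrix(y_test, y_pred)
    fig, ax = plt.subplots(figsize=(4, 4))
    im = ax.imshow(cm, cmap='Blues')
    ax.set_xticks([0, 1]); ax.set_xticklabels(['cat', 'dog'])
    ax.set_yticks([0, 1]); ax.set_yticklabels(['cat', 'dog'])
    ax.set_xlabel('Predicted label')
    ax.set_ylabel('True label')
    for i in range(2):
        for j in range(2):
            ax.text(j, i, cm[i, j], ha='center', va='center')
    ax.set_title('Cat vs. Dog Confusion Matrix')
    fig.colorbar(im)
    plt.show()

    # Plot training and validation accuracy recorded by model.fit()
    plt.plot(history.history['accuracy'], label='train accuracy')
    plt.plot(history.history['val_accuracy'], label='validation accuracy')
    plt.xlabel('Epoch')
    plt.ylabel('Accuracy')
    plt.legend()
    plt.show()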
Screenshots
  • MobileNetV2 Output Screenshot