INGREDINET: INTELLIGENT CNN FOR FOOD INGREDIENT RECOGNITION AND CLASSIFICATION

Authors

  • Mahnoor Zaman
  • Nosheen Fatima
  • Muhammad Sajid Maqool
  • Dr. Naeem Aslam
  • Rubaina Nazeer
  • Hira Saleem

Keywords:

CNN, Food Ingredient Classification, Deep Learning, Data Augmentation, Computer Vision, Image Processing, Artificial Intelligence, Image Classification, Food Recognition

Abstract

Accurate recognition of food ingredients is the basis of smart kitchens, nutritional management, automated inventory management, and intelligent food analysis. Traditional identification relies on manual inspection, which is time-consuming, labor-intensive, and prone to human error. As food technology evolves, the need for automated, vision-based solutions that provide accurate and scalable ingredient recognition continues to grow. Although deep learning has proven highly successful in general image classification, relatively little attention has been paid to designing lightweight, task-specific convolutional neural networks for multi-class ingredient recognition trained primarily on authentic image data. This work proposes IngrediNet, a custom CNN architecture designed for efficient, high-precision ingredient classification. The research pursues three goals: (i) design and optimize a lightweight CNN, (ii) compare the model's performance with the pretrained MobileNet model, and (iii) examine the effect of data augmentation on accuracy and generalization. IngrediNet was trained on a multi-class ingredient image dataset under two experimental conditions: original, unaugmented images, and data augmented with random horizontal flipping, rotation, zoom, and contrast variation. Validation accuracy, loss curves, precision, recall, F1-score, and confusion matrix analysis were used to compare overall performance and per-class reliability. The model was benchmarked against MobileNet to evaluate performance, robustness, and computational efficiency.
The experimental findings indicate that IngrediNet reached a peak validation accuracy of 99.84% when trained on original images, marginally higher than MobileNet's 98.87% under the same conditions. Notably, both models generalized well on the validation data without data augmentation, suggesting that realistic, unaugmented images suffice for high-performance classification. Although augmentation slightly reduced peak accuracy (IngrediNet: 98.28%, MobileNet: 97.66%), it made the models more robust to changes in orientation, scale, and lighting. As a computationally efficient and highly accurate engine for automated ingredient recognition, IngrediNet is well suited to deployment in resource-constrained, real-world environments such as smart kitchens and mobile applications. The results demonstrate that a well-designed custom CNN can match or outperform pretrained architectures at lower complexity, advancing practical AI-driven food analysis systems.
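The augmentation pipeline named in the abstract (random horizontal flip, rotation, zoom, and contrast variation) can be sketched in plain NumPy. The paper does not publish its implementation, so the transform parameters below, and the simplification of rotation to multiples of 90 degrees, are illustrative assumptions rather than the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(img):
    # Mirror the image horizontally with 50% probability
    return img[:, ::-1] if rng.random() < 0.5 else img

def random_rotation(img):
    # Rotate by a random multiple of 90 degrees (a coarse stand-in
    # for arbitrary-angle random rotation)
    return np.rot90(img, rng.integers(0, 4))

def random_zoom(img, max_zoom=0.2):  # max_zoom is an assumed setting
    # Zoom in by cropping a central region, then resize back to the
    # original size with nearest-neighbour sampling
    h, w = img.shape[:2]
    z = 1.0 - rng.random() * max_zoom          # zoom factor in (0.8, 1.0]
    ch, cw = int(h * z), int(w * z)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    ys = (np.arange(h) * ch // h).clip(0, ch - 1)
    xs = (np.arange(w) * cw // w).clip(0, cw - 1)
    return crop[ys][:, xs]

def random_contrast(img, max_delta=0.3):  # max_delta is an assumed setting
    # Scale pixel deviations from the mean to vary contrast,
    # then clip back into the valid [0, 1] range
    factor = 1.0 + rng.uniform(-max_delta, max_delta)
    mean = img.mean()
    return np.clip(mean + factor * (img - mean), 0.0, 1.0)

def augment(img):
    # Apply all four transforms in sequence, as during training
    for transform in (random_flip, random_rotation, random_zoom, random_contrast):
        img = transform(img)
    return img
```

In practice such transforms would typically be expressed with a framework's built-in preprocessing layers and applied on the fly to each training batch, so the model sees a differently perturbed version of every image each epoch.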

Published

2026-03-31

How to Cite

Mahnoor Zaman, Nosheen Fatima, Muhammad Sajid Maqool, Dr. Naeem Aslam, Rubaina Nazeer, & Hira Saleem. (2026). INGREDINET: INTELLIGENT CNN FOR FOOD INGREDIENT RECOGNITION AND CLASSIFICATION. Policy Research Journal, 4(3), 789–805. Retrieved from https://policyrj.com/1/article/view/1704