Multimodal AI Framework for Personalized and Health-Aware Cooking Recommendations
No.: 9
Access: Conference attendees only
Updated: 2025-11-04 14:05:01 Views: 111
Extended Type 2
Abstract
Amid growing interest in health-conscious eating and personalized nutrition, traditional recipe recommendation systems often fail to account for diverse user needs, ingredient availability, and practical cooking constraints. The Multimodal Artificial Intelligence (AI) Framework proposed in this study integrates and analyzes multiple data modalities—textual dietary preferences, food images, and cooking videos—to generate personalized, health-aware cooking recommendations. The framework tailors recipe suggestions using individual health profiles, ingredients detected from visual inputs, and user-specific cooking skill levels inferred from video analysis. By leveraging multimodal deep learning algorithms, the system delivers contextually aware, precise, and adaptive recommendations. Experimental evaluations on benchmark and hybrid datasets demonstrate its effectiveness in enhancing recommendation relevance, supporting dietary compliance, and improving overall user satisfaction. These results indicate strong potential for real-world deployment in intelligent culinary assistants, personalized diet planning platforms, and smart health applications.
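The abstract describes fusing three modality signals (dietary preferences from text, ingredients from images, skill level from video) into a single recommendation. The paper does not specify the fusion mechanism; the sketch below illustrates one plausible late-fusion scoring scheme in Python, where the per-modality scores, weights, and `Recipe` fields are all illustrative assumptions, not the authors' actual model.

```python
from dataclasses import dataclass

@dataclass
class Recipe:
    name: str
    ingredients: set    # required ingredients
    difficulty: int     # 1 (easy) .. 5 (hard)
    tags: set           # dietary tags, e.g. {"vegetarian", "low-sodium"}

def score_recipe(recipe, user_tags, detected_ingredients, user_skill,
                 weights=(0.4, 0.4, 0.2)):
    """Late fusion: weighted sum of three per-modality scores (illustrative)."""
    # Text modality: fraction of the user's dietary tags the recipe satisfies.
    text = len(recipe.tags & user_tags) / len(user_tags) if user_tags else 1.0
    # Image modality: fraction of required ingredients detected on hand.
    img = len(recipe.ingredients & detected_ingredients) / len(recipe.ingredients)
    # Video modality: feasibility given the user's estimated skill (1..5).
    skill = 1.0 if user_skill >= recipe.difficulty else user_skill / recipe.difficulty
    w_t, w_i, w_s = weights
    return w_t * text + w_i * img + w_s * skill

def recommend(recipes, user_tags, detected_ingredients, user_skill, k=3):
    """Rank recipes by fused score and return the top-k names."""
    ranked = sorted(
        recipes,
        key=lambda r: score_recipe(r, user_tags, detected_ingredients, user_skill),
        reverse=True)
    return [r.name for r in ranked[:k]]
```

In a real system the three scores would come from learned models (e.g. a text encoder over the dietary profile, an ingredient detector over images, and a skill classifier over video) rather than set overlaps, but the fusion step itself can remain this simple weighted combination.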
Keywords
Personalized Recipe Recommendation, Multimodal Deep Learning, Ingredient Recognition, Cooking Skill Estimation
Authors
Swarna Suganthi S
Velammal College of Engineering and Technology
Pooja Shree P
Velammal College of Engineering and Technology, Madurai