
Courses
September 2023 - April 2024
Here are the courses I completed during the first year of my Master's in Engineering in Artificial Intelligence and Applied Mathematics at École Normale Supérieure Paris-Saclay, taken as part of the MVA (Mathematics, Vision, Learning) program. Over two semesters I earned the 50 credits required for the Master's, with a GPA of 87.8.

Convex Optimization
Credit : 5.0
Grade : 89
The Convex Optimization course covers the key mathematical concepts needed to understand optimization problems. It begins with convex sets and functions and their defining properties: a convex set is one where, for any two points in the set, the line segment connecting them lies entirely within the set, and a convex function is one whose graph lies on or below any line segment connecting two points on it. The course then introduces the formulation of convex optimization problems, which minimize a convex objective function subject to convex constraints, covering examples like linear programming and quadratic programming. A major focus is duality theory, including the derivation of dual problems, strong duality, and Slater's condition, which characterize when the primal and dual optimal values coincide. Optimality conditions, both first-order (the Karush-Kuhn-Tucker conditions) and second-order, are explored in depth, providing necessary and sufficient conditions for optimality in convex problems. The course also covers a range of optimization algorithms: gradient descent for differentiable functions, Newton's method, which exploits second-order derivatives for faster convergence, and interior-point methods for large-scale problems, especially in linear and quadratic programming. Finally, the course turns to applications of convex optimization in machine learning, particularly regularization with the L1 and L2 norms, which promote sparsity and smoothness in models. These concepts and algorithms form the basis for solving problems in fields such as machine learning, economics, signal processing, and control systems.
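As an illustration of these ideas, here is a minimal sketch (not taken from the course materials) that poses L1-regularized least squares, the LASSO, as a convex program and solves it with the cvxpy library; the data and regularization weight are made up for the example.

```python
# Minimal sketch: L1-regularized least squares (LASSO) posed as a convex
# program and solved with cvxpy. Data here is random, for illustration only.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))   # design matrix
b = rng.standard_normal(50)         # observations
lam = 0.1                           # regularization weight (assumed value)

x = cp.Variable(20)
objective = cp.Minimize(cp.sum_squares(A @ x - b) + lam * cp.norm1(x))
problem = cp.Problem(objective)
problem.solve()                     # calls a convex solver under the hood

print("optimal value:", problem.value)
print("nonzeros in x:", np.sum(np.abs(x.value) > 1e-6))  # L1 promotes sparsity
```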
Topological Data Analysis
Credit : 5.0
Grade : 91
This course focuses on using topological methods to analyze complex, high-dimensional data. TDA provides tools to extract meaningful patterns from data by studying its shape and structure, particularly in cases where traditional methods may struggle. A key concept in TDA is persistent homology, which tracks the features of a dataset across different scales and helps identify significant structures such as clusters, holes, and voids. The course introduces simplicial complexes and filtrations, which are used to model data in a way that captures its topological features. Students learn how to compute topological summaries of data and interpret them in various applications, from neuroscience and biology to machine learning. The course also explores software tools and libraries, such as GUDHI and Ripser, for performing TDA on real datasets. By the end of the course, students are able to apply TDA to extract insights from complex datasets, enabling new approaches in data analysis and feature extraction.
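To make this concrete, here is a minimal sketch using the GUDHI library mentioned above, computing the persistence diagram of a synthetic noisy circle; the point cloud and parameters are illustrative, not from the course.

```python
# Minimal sketch: persistent homology of a noisy circle with GUDHI.
import numpy as np
import gudhi

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 100)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((100, 2))

# Vietoris-Rips filtration: simplices appear as the scale parameter grows.
rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)

# Persistence pairs come back as (dimension, (birth, death)); a long-lived
# H1 feature corresponds to the circle's loop.
diagram = st.persistence()
loops = [(b, d) for dim, (b, d) in diagram if dim == 1]
print("H1 features (birth, death):", loops)
```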
Image and Video Processing, Computer Vision
Credit : 5.0
Grade : 90
This course focuses on advanced techniques in image and video processing, with a strong emphasis on modern deep learning architectures and their applications in computer vision. Students explore various Convolutional Neural Networks (CNNs), including ResNet (Residual Networks), which introduces skip connections to address vanishing gradients and allow much deeper networks, as well as other CNN architectures such as VGG and Inception. Moving to newer models, the course studies Transformers for computer vision, such as the Vision Transformer (ViT), which capture long-range dependencies in images by applying attention mechanisms, originally developed for NLP, to visual tasks. The course highlights important techniques for improving model performance, including data augmentation methods such as rotation, scaling, flipping, and color jittering for images, as well as temporal augmentations like frame sampling and jittering for videos; these augmentations diversify the training data and help prevent overfitting. Students also examine object detection models like YOLO (You Only Look Once) and Faster R-CNN, as well as semantic segmentation with models like U-Net. In video processing, the course explores techniques for video classification and action recognition, integrating spatiotemporal data to capture motion and context across frames. Video object detection and tracking are also discussed, with an emphasis on real-time applications. Students gain practical experience implementing these techniques with frameworks like TensorFlow, PyTorch, and OpenCV, applying them to tasks in fields such as autonomous driving, healthcare imaging, and video surveillance.
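As a small illustration of the augmentation techniques described above, here is a sketch of an image augmentation pipeline using torchvision; the specific transforms and parameter values are my own choices, not the course's.

```python
# Minimal sketch: an image augmentation pipeline with torchvision, covering
# the rotation / scaling / flipping / color-jitter transforms described above.
import torchvision.transforms as T

train_transforms = T.Compose([
    T.RandomRotation(degrees=15),                      # rotation
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),        # scaling + crop
    T.RandomHorizontalFlip(p=0.5),                     # flipping
    T.ColorJitter(brightness=0.2, contrast=0.2,
                  saturation=0.2, hue=0.05),           # color jittering
    T.ToTensor(),
])

# Usage, assuming a PIL image `img`: tensor = train_transforms(img)
```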
Machine Learning for Time Series
Credit : 5.0
Grade : 86
This course focuses on the application of machine learning techniques to analyze and predict time series data. Students learn how to handle sequential data that is indexed in time, such as stock prices, weather data, and sensor readings. The course covers key time series concepts, including stationarity, trend analysis, and seasonality, while addressing the challenges of modeling time-dependent data. Core topics include traditional time series forecasting methods like ARIMA (AutoRegressive Integrated Moving Average) and exponential smoothing, along with more advanced machine learning techniques. These include regression models, decision trees, random forests, and support vector machines tailored to time series analysis. The course emphasizes deep learning approaches, such as Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Gated Recurrent Units (GRUs), which are specifically designed to capture long-term dependencies in sequential data. Students also explore transformer-based models for time series prediction, learning how attention mechanisms can enhance the performance of models by capturing intricate temporal patterns. Techniques for model validation and evaluation, such as cross-validation, backtesting, and error metrics (e.g., RMSE, MAE), are also covered to ensure robust model performance.
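To illustrate the classical forecasting side, here is a minimal sketch that fits an ARIMA model with statsmodels and evaluates it with RMSE on a held-out tail; the series and the (p, d, q) order are arbitrary placeholders.

```python
# Minimal sketch: fitting an ARIMA model and evaluating it backtesting-style
# with statsmodels and scikit-learn. The series here is synthetic.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(200))       # random-walk-like series

train, test = y[:150], y[150:]
model = ARIMA(train, order=(1, 1, 1)).fit()   # (p, d, q) chosen arbitrarily
forecast = model.forecast(steps=len(test))

rmse = mean_squared_error(test, forecast) ** 0.5
print(f"RMSE on the held-out tail: {rmse:.3f}")
```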
Deep Learning for Image Generation
Credit : 5.0
Grade : 84
This course explores the use of deep learning techniques for generating images, focusing on methods like Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). Students learn how GANs, which pit a generator against a discriminator, are trained adversarially: the generator learns to produce images the discriminator cannot distinguish from real data. The course also covers the theory behind VAEs, which use probabilistic methods to model data distributions and generate new images by sampling from a learned latent space. Techniques for improving and controlling image generation, such as conditional GANs and StyleGAN, are also discussed. Students explore practical applications including image-to-image translation, super-resolution, and style transfer, and through hands-on projects gain experience training and fine-tuning models for various image generation tasks, using deep learning frameworks such as TensorFlow and PyTorch.
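As a sketch of the adversarial training just described, here is one GAN training step in PyTorch, with tiny MLPs standing in for real networks; architecture, shapes, and data are illustrative only.

```python
# Minimal sketch: one adversarial training step for a GAN on flattened images,
# using tiny MLPs so the example stays self-contained.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, img_dim) * 2 - 1        # stand-in for a real batch
z = torch.randn(32, latent_dim)

# Discriminator step: push real images toward 1, generated images toward 0.
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator into predicting 1 on fakes.
loss_g = bce(D(G(z)), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```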
Object Recognition
Credit : 5.0
Grade : 86
This course covers the fundamental techniques and algorithms used for object recognition in images and videos, a key area in computer vision. It starts with an introduction to classical approaches, such as feature extraction and the use of methods like Scale-Invariant Feature Transform (SIFT) and Histogram of Oriented Gradients (HOG). The course then moves into modern deep learning techniques, focusing on Convolutional Neural Networks (CNNs), which have become the backbone of object recognition tasks. Topics include CNN architectures (e.g., LeNet, AlexNet, VGG, ResNet) and how they are used to classify and localize objects within images. Students also explore the use of Region-based CNNs (R-CNN) for object detection and YOLO (You Only Look Once) for real-time object recognition. The course also covers challenges in object recognition, such as dealing with occlusions, variations in lighting, and object scaling. Students gain hands-on experience with training deep learning models on labeled image datasets and applying these models to real-world object recognition tasks, utilizing tools like TensorFlow and PyTorch.
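To illustrate the classical feature-based pipeline that predates CNN detectors, here is a minimal SIFT keypoint-matching sketch with OpenCV; the image file names are hypothetical.

```python
# Minimal sketch: SIFT keypoint extraction and matching with OpenCV.
import cv2

img1 = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)    # hypothetical files
img2 = cv2.imread("object.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} confident matches out of {len(matches)}")
```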
Turing Seminar
Credit : 2.5
Grade : 92
The Turing Seminar explores future innovations and advancements in the field of Artificial Intelligence (AI). The course involves extensive research and analysis of cutting-edge AI technologies, with a focus on emerging trends and their potential impact on various industries. Students write a comprehensive report on anticipated breakthroughs in AI, such as advances in machine learning algorithms, natural language processing, computer vision, and autonomous systems. The course encourages students to analyze the ethical and societal implications of these innovations, including concerns about AI governance, bias, and accountability. Students are also expected to explore interdisciplinary approaches, examining how AI intersects with fields like neuroscience, robotics, quantum computing, and biotechnology. Through literature reviews, discussions, and hands-on projects, students gain an in-depth understanding of the state-of-the-art research shaping the future of AI, while developing the skills to critically evaluate and contribute to the field's ongoing evolution. The final report presents an overview of potential next steps in AI development, offering insights into the technological, economic, and ethical challenges ahead.
Responsible Machine Learning
Credit : 5.0
Grade : 90
This course focuses on the ethical considerations and societal impacts of machine learning. It explores key topics such as bias, fairness, transparency, accountability, and privacy in ML models. Students learn how to identify and mitigate biases in training data and algorithms, ensuring that machine learning systems are fair and equitable. The course also covers the importance of interpretability in models, providing tools to make machine learning systems more understandable and transparent for users and stakeholders. Ethical challenges, including data privacy concerns and the responsible use of AI in various domains, are discussed through case studies and practical exercises. The course emphasizes the development of machine learning models that are both technically sound and socially responsible.
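As one concrete example of the fairness measures studied, here is a minimal sketch computing the demographic parity gap of a binary classifier with NumPy; the predictions and group labels are synthetic placeholders.

```python
# Minimal sketch: demographic parity gap of a binary classifier.
import numpy as np

rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, 1000)    # the model's binary decisions
group = rng.integers(0, 2, 1000)     # sensitive attribute (0 / 1)

rate_0 = y_pred[group == 0].mean()   # positive-decision rate in group 0
rate_1 = y_pred[group == 1].mean()   # positive-decision rate in group 1
print(f"demographic parity gap: {abs(rate_0 - rate_1):.3f}")  # 0 = parity
```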
Ecology in Data Science
Credit : 2.5
Grade : 78
This course integrates ecological concepts with data science techniques to address environmental challenges. It focuses on how data science tools can be applied to ecological research, from biodiversity monitoring to ecosystem modeling. Topics include the analysis of ecological datasets, such as species distributions and population dynamics, using statistical and machine learning methods. Students learn about ecological models, such as species abundance models, and how to use data to predict environmental changes. The course also covers the use of remote sensing data and geographic information systems (GIS) for spatial analysis in ecology. Emphasis is placed on the development of data-driven solutions to manage natural resources, understand climate change, and promote biodiversity conservation. Through hands-on projects, students apply machine learning and data analysis techniques to real-world ecological problems.
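To show what such an analysis can look like, here is a minimal sketch framing species distribution modeling as binary classification with scikit-learn; the environmental features and presence labels are synthetic.

```python
# Minimal sketch: a species distribution model as presence/absence
# classification with a random forest. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))              # e.g. temperature, rainfall,
                                               # elevation, vegetation index
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # presence/absence, toy rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```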
Hospital Project in Collaboration with the Georges Pompidou European Hospital
Credit : 5.0
Grade : 96
This project, conducted in collaboration with the Georges Pompidou European Hospital, focused on leveraging machine learning and data science techniques to improve hospital operations and decision-making. Specifically, it aimed to develop a predictive model to enhance the multidisciplinary case review (RCP, réunion de concertation pluridisciplinaire) process by automating the classification and outcome prediction of RCP reports, using historical data collected from the hospital. This involved analyzing reports written by medical professionals, extracting relevant features, and training natural language processing (NLP) models to classify the reports into categories and predict outcomes based on historical trends. The project also addressed challenges such as data anonymization, inconsistencies in report formatting, and incomplete or missing data. By developing a robust and reliable decision-support tool, the project sought to improve the efficiency and accuracy of medical decision-making, helping healthcare providers make better-informed choices.
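The actual pipeline and hospital data cannot be shown here, but the following sketch illustrates the general shape of such a report classifier, using TF-IDF features and logistic regression in scikit-learn; the reports, categories, and model choice are entirely hypothetical, not the project's.

```python
# Minimal sketch: text classification of clinical-style reports.
# Reports, labels, and categories below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = ["patient stable, follow-up in six months",
           "tumor progression, new treatment line proposed",
           "complete response, surveillance recommended"]
labels = ["follow_up", "treatment_change", "surveillance"]  # hypothetical

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(reports, labels)
print(clf.predict(["partial response, continue current treatment"]))
```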
Natural Language Processing and LLMs
Credit : 5.0
Grade : 89
This course focuses on the fundamental techniques and applications of Natural Language Processing (NLP), with a deep dive into Large Language Models (LLMs) like GPT and BERT. Students explore core concepts of NLP, such as tokenization, part-of-speech tagging, named entity recognition, and more advanced topics like dependency parsing and sentiment analysis. A significant portion of the course is dedicated to understanding the inner workings of transformer architectures, particularly their attention mechanisms, which are key to enabling LLMs to process and generate human language with high efficiency. The course covers the self-attention mechanism in detail, explaining how it allows transformers to capture long-range dependencies in text by assigning different weights to each token in the input sequence. Students learn how transformers apply multi-head attention, enabling them to focus on different aspects of the input simultaneously, and how this contributes to better performance in tasks like machine translation and text generation. The mechanics of positional encoding and how transformers deal with the sequential nature of language are also explored. Additionally, the course dives into the training process of LLMs, including unsupervised learning and the fine-tuning techniques used to adapt pre-trained models to specific domains. Practical applications of LLMs are studied, such as text summarization, question answering, dialogue generation, and sentiment analysis. Emphasis is placed on understanding the computational complexity of training these models and the challenges related to bias in language models and ethical considerations. Hands-on experience is gained using frameworks like Hugging Face’s Transformers and TensorFlow, where students experiment with pre-trained models and fine-tune them for real-world tasks. The course provides a comprehensive understanding of the transformer model, its attention mechanisms, and their impact on modern NLP applications, offering insights into the current state of AI-driven language technologies and their future potential.
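To ground the discussion of attention, here is a minimal single-head scaled dot-product self-attention sketch in PyTorch; the dimensions are illustrative.

```python
# Minimal sketch: single-head scaled dot-product self-attention,
# the mechanism described above. Dimensions are illustrative.
import math
import torch
import torch.nn.functional as F

batch, seq_len, d_model = 2, 10, 64
x = torch.randn(batch, seq_len, d_model)          # token embeddings

Wq = torch.nn.Linear(d_model, d_model, bias=False)
Wk = torch.nn.Linear(d_model, d_model, bias=False)
Wv = torch.nn.Linear(d_model, d_model, bias=False)

Q, K, V = Wq(x), Wk(x), Wv(x)
scores = Q @ K.transpose(-2, -1) / math.sqrt(d_model)  # pairwise affinities
weights = F.softmax(scores, dim=-1)                    # each row sums to 1
out = weights @ V                                      # weighted mix of values

print(out.shape)  # torch.Size([2, 10, 64]); one contextual vector per token
```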