Large Language Models (LLMs) offer a practical answer to a real-world challenge: providing personalized feedback on text and programming exercises in large educational settings. This thesis integrates LLMs into Athena, a key component of the Artemis Learning Management System, with the objective of improving the quality and adaptability of automated feedback while directly supporting tutors in their assessment work. The core aim is to enhance Artemis’s existing semi-automated assessment of text exercises, currently powered by CoFee, by introducing LLMs into the process; in parallel, this work adds LLM-based automated feedback for programming exercises. Both efforts build on a newly established research and development environment designed to streamline the future creation of automated assessment approaches for Artemis within Athena. The methodology follows two stages: LLMs first generate automated feedback for text and programming exercises, and the generated feedback is then evaluated with respect to quality, cost, and response time. Initial results indicate that this integrated approach is promising, especially for text exercises, where it aligns well with educational objectives and offers substantial potential for future work.
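
To make the two-stage methodology concrete, the sketch below shows roughly what stage one (LLM-based feedback generation for a text submission) might look like, instrumented to collect the response-time and token-count data that stage two (the quality, cost, and response-time evaluation) would analyze. It is a minimal illustration assuming the OpenAI chat-completions API; the prompt, the model choice, and the function name are hypothetical and do not reflect Athena’s actual implementation.

```python
# Sketch of the two-stage idea: (1) ask an LLM to generate feedback on a
# student's text submission, (2) record cost and response-time metrics for
# later evaluation. Names, prompt, and model are illustrative assumptions.
import json
import time

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """You are a tutor grading a text exercise.

Problem statement:
{problem}

Sample solution:
{sample_solution}

Student submission:
{submission}

Return only a JSON array of feedback items, each with "comment" (string)
and "credits" (number)."""


def generate_feedback(problem: str, sample_solution: str, submission: str) -> dict:
    """Stage 1: generate feedback; also collect data for the stage-2 evaluation."""
    start = time.perf_counter()
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; an evaluation would compare several
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(
                problem=problem,
                sample_solution=sample_solution,
                submission=submission,
            ),
        }],
        temperature=0,  # deterministic output aids reproducible evaluation
    )
    elapsed = time.perf_counter() - start
    return {
        # Assumes the model returned well-formed JSON as instructed.
        "feedback": json.loads(response.choices[0].message.content),
        "response_time_s": elapsed,
        # Token counts serve as a proxy for cost in stage 2.
        "prompt_tokens": response.usage.prompt_tokens,
        "completion_tokens": response.usage.completion_tokens,
    }
```

Aggregating the returned metrics over many submissions, alongside tutor ratings of the feedback itself, would yield exactly the quality, cost, and response-time comparison the methodology describes.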